This chapter describes how to use the Network Dispatcher feature.
Network Dispatcher uses load-balancing technology from IBM Research Division to determine the most appropriate server to receive each new connection. This is the same technology used in IBM's eNetwork Dispatcher product for Solaris, Windows NT and AIX.
Network Dispatcher is a feature that boosts the performance of servers by forwarding TCP/IP session requests to different servers within a group, thus load balancing the requests among all of the servers. The forwarding is transparent to users and to applications. Network Dispatcher is useful for server applications such as e-mail, World Wide Web servers, distributed parallel database queries, and other TCP/IP applications.
Network Dispatcher can also be used for load balancing stateless UDP application traffic to a group of servers.
Network Dispatcher can help maximize the potential of your site by providing a powerful, flexible, and scalable solution to peak-demand problems. During peak demand periods, Network Dispatcher can automatically find the optimal server to handle incoming requests.
The Network Dispatcher function does not use a domain name server for load balancing. It balances traffic among your servers through a unique combination of load balancing and management software. Network Dispatcher can also detect a failed server and forward traffic to other available servers.
All client requests sent to the Network Dispatcher machine are forwarded to the server that is selected by the Network Dispatcher as the optimal server according to certain dynamically set weights. You can use the default values for those weights or change the values during the configuration process.
The server sends a response back to the client without any involvement of Network Dispatcher. No additional software is required on your servers to communicate with Network Dispatcher.
The Network Dispatcher function is the key to stable, efficient management of a large, scalable network of servers. With Network Dispatcher, you can link many individual servers into what appears to be a single, virtual server. Your site thus appears as a single IP address to the world. Network Dispatcher functions independently of a domain name server; all requests are sent to the IP address of the Network Dispatcher machine.
Network Dispatcher allows an SNMP-based management application to monitor Network Dispatcher status by receiving basic statistics and notifications of potential alert situations. Refer to "SNMP Management" in the Protocol Configuration and Monitoring Reference Volume 1 for more information.
Network Dispatcher brings distinct advantages in load balancing traffic to clustered servers, resulting in stable and efficient management of your site.
There are many different approaches to load balancing. Some of these approaches allow users to choose a different server at random if the first server is slow or not responding. Another approach is round-robin, in which the domain name server selects a server to handle requests. This approach is better, but does not take into consideration the current load on the target server or even whether the target server is available.
Network Dispatcher can load balance requests to different servers based on the type of request, an analysis of the load on servers, or a configurable set of weights that you assign. To manage each different type of balancing, the Network Dispatcher has the following components: the executor, which forwards each new connection to a server; the advisors, which collect protocol-specific load and availability information from the servers; and the manager, which uses this information to set the server weights that the executor uses.
Network Dispatcher supports advisors for FTP, HTTP, SMTP, NNTP, POP3, and Telnet. It also provides a TN3270 advisor that works with TN3270 servers in IBM 2210s, IBM 2212s, and IBM 2216s, and an MVS advisor that works with Workload Manager (WLM) on MVS systems. WLM manages the amount of workload on an individual MVS ID. Network Dispatcher can use WLM to help load balance requests to MVS servers running OS/390 V1R3 or a later release.
There are no protocol advisors specifically for UDP protocols. If you have MVS servers, you can use the MVS system advisor to provide server load information. Also, if the port is handling TCP and UDP traffic, the appropriate TCP protocol advisor can be used to provide advisor input for the port. Network Dispatcher will use this input in load balancing both TCP and UDP traffic on that port.
The manager is an optional component. However, if you do not use the manager, the Network Dispatcher will balance the load using a round-robin scheduling method based on the current server weights.
When using Network Dispatcher to load balance stateless UDP traffic, you must use only servers that respond to the client using the destination IP address from the request. See "Configuring a Server for Network Dispatcher" for a more complete explanation.
The base Network Dispatcher function has the following characteristics that make it a single point of failure from many different perspectives:
All these characteristics make the following failures critical for the whole cluster:
In all of these failure cases, which include not only failures of the Network Dispatcher itself but also failures in its immediate network neighborhood, all existing connections are lost. Even with a backup Network Dispatcher running standard IP recovery mechanisms, recovery is, at best, slow and applies only to new connections. In the worst case, the connections are not recovered at all.
To improve Network Dispatcher availability, the Network Dispatcher High Availability function uses the following mechanisms, each described below: failure detection through Heartbeat messages and reachability criteria, database synchronization between the primary and backup Network Dispatchers, and IP takeover of the cluster addresses.
Besides the basic failure detection criterion (the loss of connectivity between the active and standby Network Dispatchers, detected through the Heartbeat messages), there is another failure detection mechanism named "reachability criteria." When you configure the Network Dispatcher, you provide a list of hosts that each Network Dispatcher should be able to reach in order to work correctly. These hosts can be routers, IP servers, or other types of hosts. Host reachability is determined by pinging the host.
Switchover takes place either if the Heartbeat messages cannot go through, or if the reachability criteria are no longer met by the active Network Dispatcher and the standby Network Dispatcher is reachable. To make the decision based on all available information, the active Network Dispatcher regularly sends the standby Network Dispatcher its reachability capabilities. The standby Network Dispatcher then compares the capabilities with its own and decides whether to switch.
The primary and backup Network Dispatchers keep their databases synchronized through the "Heartbeat" mechanism. The Network Dispatcher database includes connection tables, reachability tables, and other information. The Network Dispatcher High Availability function uses a database synchronization protocol that ensures that both Network Dispatchers contain the same connection table entries. This synchronization takes into account a known error margin for transmission delays. The protocol performs an initial synchronization of the databases and then maintains synchronization through periodic updates.
In the case of a Network Dispatcher failure, the IP takeover mechanism promptly directs all traffic toward the standby Network Dispatcher. The database synchronization mechanism ensures that the standby has the same connection table entries as the active Network Dispatcher. When the failure occurs in the network (in any intermediate piece of hardware or software between the client and the back-end server) and an alternate path through the standby Network Dispatcher works, the switchover is performed across the alternate path.
Note: | Cluster IP Addresses are assumed to be on the same logical subnet as the previous hop router (IP router). |
The IP router resolves the cluster address through the ARP protocol. To perform the IP takeover, the Network Dispatcher (the standby becoming active) issues an ARP request for its own address, which is broadcast to all directly attached networks belonging to the logical subnet of the cluster. The previous-hop IP routers update their ARP tables (according to RFC 826) and send all traffic for that cluster to the newly active (previously standby) Network Dispatcher.
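One way to observe the takeover from a host or router attached to the same logical subnet is to look at its ARP cache before and after the switchover. The following is only a sketch: 9.67.133.158 is an example cluster address, and the arp command options vary slightly by platform.

  arp -a | grep 9.67.133.158

The hardware address shown for the cluster address should change from that of the failed Network Dispatcher to that of the newly active one. If a neighbor still holds a stale entry, deleting it with arp -d 9.67.133.158 forces the address to be resolved again.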
There are many ways that you can configure Network Dispatcher to support your site. If you have only one host name for your site, to which all of your customers connect, you can define a single cluster with the ports on which you want to receive connections. This configuration is shown in Figure 5.
Figure 5. Example of Network Dispatcher Configured With a Single Cluster and 2 Ports
Another way of configuring Network Dispatcher would be necessary if your site does content hosting for several companies or departments, each of which comes into your site with a different URL. In this case, you might want to define a cluster for each company or department, with the ports on which you want to receive connections at that URL, as shown in Figure 6.
Figure 6. Example of Network Dispatcher Configured With 3 Clusters and 3 URLs
A third way of configuring Network Dispatcher would be appropriate if you have a very large site with many servers dedicated to each protocol supported. For example, you may choose to have separate FTP servers with direct T3 lines for large downloadable files. In this case, you might want to define a cluster for each protocol with a single port but many servers as shown in Figure 7.
Figure 7. Example of Network Dispatcher Configured with 3 Clusters and 3 Ports
Before configuring Network Dispatcher:
If high availability is important for your network, a typical high availability configuration is shown in Figure 8.
Figure 8. High Availability Network Dispatcher Configuration
To configure Network Dispatcher on an IBM 2210:
Note: | Cluster IP Addresses must not match the internal IP address of the router and must not match any interface IP addresses defined on the router. |
Notes:
If you are configuring the Network Dispatcher for high availability, continue with the following steps. Otherwise, you have completed the configuration.
Note: | Perform these steps on the primary Network Dispatcher and then on the backup. To ensure proper database synchronization, the executor in the primary Network Dispatcher must be enabled before the executor in the backup. |
You can change the configuration using the set, remove, and disable commands. See "Configuring and Monitoring the Network Dispatcher Feature" for more information about these commands.
To configure a server for use with Network Dispatcher:
For the TCP and UDP servers to work, you must set (or preferably alias) the loopback device (usually called lo0) to the cluster address. Network Dispatcher does not change the destination IP address in the IP packet before forwarding the packet to a server machine. When you set or alias the loopback device to the cluster address, the server machine will accept a packet that was addressed to the cluster address.
It is important that the server use the cluster address rather than its own IP address to respond to the client. This is not a concern with TCP servers, but some UDP servers use their own IP address when they respond to requests that were sent to the cluster address. When the server uses its own IP address, some clients will discard the server's response because it is not from an expected source IP address. You should use only UDP servers that use the destination IP address from the request when they respond to the client. In this case, the destination IP address from the request is the cluster address.
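If you are not sure whether a particular UDP server replies from the cluster address, one quick check (a sketch only, assuming the server has tcpdump available and using 9.67.133.158 as an example cluster address) is to capture traffic on the server while a client sends a request to the cluster address:

  tcpdump -n udp and host 9.67.133.158

The server's responses should show 9.67.133.158 as the source address; if they show the server's own interface address instead, that server is not suitable for load balancing stateless UDP traffic.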
If you have an operating system that supports network interface aliasing such as AIX, Solaris, or Windows NT, you should alias the loopback device to the cluster address. The benefit of using an operating system that supports aliases is that you can configure the server machines to serve multiple cluster addresses.
If you have a server with an operating system that does not support aliases, such as HP-UX and OS/2, you must set lo0 to the cluster address.
If your server is an MVS system running TCP/IP V3R2, you must set the VIPA address to the cluster address. This will function as a loopback address. The VIPA address must not belong to a subnet that is directly connected to the MVS node. If your MVS system is running TCP/IP V3R3, you must set the loopback device to the cluster address. If you are using high availability, you must enable RouteD in the MVS system so that the high availability takeover mechanism will function properly.
Note: | The commands listed in this chapter were tested on the following operating systems and levels: AIX 4.1.5 and 4.2, HP-UX 10.2.0, Linux, OS/2 Warp Connect Version 3.0, OS/2 Warp Version 4.0, Solaris 2.5 (Sun OS 5.5), and Windows NT 3.51 and 4.0. |
Use the command for your operating system as shown in Table 10 to set or alias the loopback device.
Table 10. Commands to alias the loopback device (lo0) for Dispatcher
System | Command
---|---
AIX | ifconfig lo0 alias cluster_address
HP-UX | ifconfig lo0 cluster_address
Linux | ifconfig lo:1 cluster_address netmask 255.255.255.255 up
OS/2 | ifconfig lo cluster_address
Solaris | ifconfig lo0:1 cluster_address 127.0.0.1 up
Windows NT | Add the MS Loopback Adapter through the Network settings in the Control Panel and set its IP address to the cluster_address
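For example, on an AIX server with a cluster address of 9.67.133.158 (an illustrative address; substitute your own cluster address), you would alias the loopback device and can then display the interface to confirm that the alias was added:

  ifconfig lo0 alias 9.67.133.158
  ifconfig lo0

The second command should list 9.67.133.158 in addition to 127.0.0.1. Running netstat -rn afterward shows whether an extra route was created, as described next.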
On some operating systems, a default route may have been created when the loopback device was set or aliased, and it needs to be removed. For example, on Windows NT the route print command might display the following:

Active Routes:

  Network Address          Netmask  Gateway Address      Interface  Metric
          0.0.0.0          0.0.0.0       9.67.128.1    9.67.133.67       1
          9.0.0.0        255.0.0.0     9.67.133.158   9.67.133.158       1
       9.67.128.0    255.255.248.0      9.67.133.67    9.67.133.67       1
      9.67.133.67  255.255.255.255        127.0.0.1      127.0.0.1       1
     9.67.133.158  255.255.255.255        127.0.0.1      127.0.0.1       1
    9.255.255.255  255.255.255.255      9.67.133.67    9.67.133.67       1
        127.0.0.0        255.0.0.0        127.0.0.1      127.0.0.1       1
        224.0.0.0        224.0.0.0     9.67.133.158   9.67.133.158       1
        224.0.0.0        224.0.0.0      9.67.133.67    9.67.133.67       1
  255.255.255.255  255.255.255.255      9.67.133.67    9.67.133.67       1

In this example, the following extra route must be removed:

          9.0.0.0        255.0.0.0     9.67.133.158   9.67.133.158       1
Use the command from Table 11 for your operating system to delete any extra routes.
Table 11. Commands to Delete Routes for Various Operating Systems
Operating System | Command
---|---
AIX | route delete -net network_address cluster_address
HP-UX | route delete cluster_address cluster_address
Solaris | No need to delete route.
OS/2 | No need to delete route.
Windows NT | route delete network_address cluster_address
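Continuing the Windows NT example above (the addresses are taken from the sample route print output and are illustrative only; substitute the network address and cluster address from your own routing table), the extra route would be removed with:

  route delete 9.0.0.0 9.67.133.158

On AIX, the equivalent command for the same example addresses would be route delete -net 9.0.0.0 9.67.133.158.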
Network Dispatcher can be used with a cluster of 2210s, 2212s, Network Utilities, or 2216s running the TN3270 server function to provide TN3270E server support for large 3270 environments. The TN3270 advisor allows the Network Dispatcher to collect load statistics from each TN3270E server in real time to achieve the best possible distribution among the TN3270 servers. In addition to the TN3270 servers external to the Network Dispatcher router, one of the TN3270 servers in the cluster can be internal; that is, it can run in the same router as Network Dispatcher.
Configuration of the TN3270E servers is essentially the same whether or not you have a Network Dispatcher in front of the servers. In fact, the TN3270E server is unaware that the traffic from the clients is being dispatched through another machine. However, there are some points to keep in mind when setting up the external TN3270 servers for use with Network Dispatcher:
When the TN3270 server is in the same router as Network Dispatcher, the following applies:
Special care has to be taken with explicit LU definitions in a Network Dispatcher environment. A session request for either an implicit or an explicit LU can be dispatched to any server. This means that each explicit LU has to be defined in every server, since it is not known in advance to which server the session will be dispatched.
This section describes using Network Dispatcher with Scalable High Availability Cache (SHAC); Figure 9 shows a diagram of a SHAC in a network. Scalable High Availability Cache (SHAC) consists of a group of Web server caches plus a separate Network Dispatcher; the caches are configured as servers in the Network Dispatcher. The caches in a group share a common cluster and port, and the same cluster and port values are configured in Network Dispatcher. The mode of the port is set to extcache to indicate that it feeds an external scalable cache array. See the add port command in Add.
Note: | More than one cache group may be located in the Network Dispatcher. |
The advisor and the manager are critical to SHAC. The HTTP advisor must be enabled on any port for which there are SHAC caches; the advisor queries are used to determine whether the configured caches are operational. Connections are routed to caches at connection establishment time based on the manager's weights, so the manager proportions should be set so that advisor input is taken into account. This is especially important when caches become enabled or disabled.
As with other servers, the interface IP addresses of the caches are used as the server addresses. Figure 9 shows an example, including the important IP addresses, netmasks, and routing information. In practice, most clients would be located on the Internet. In any case, the route from the clients to the caches must go through the Network Dispatcher (therefore, clients cannot be attached to the 113 ring in the figure).
Figure 9. Two caches with Network Dispatcher, client and backend server